The memize npm package is a simple utility for memoizing functions. Memoization is a technique used to speed up function execution by caching the results of expensive function calls and returning the cached result when the same inputs occur again.
Basic Memoization
This feature allows you to memoize a function so that it caches the results of expensive function calls. When the same inputs are provided again, the cached result is returned instead of recomputing the result.
const memize = require('memize');
function expensiveFunction(x) {
  console.log('Computing...');
  return x * x;
}
const memoizedFunction = memize(expensiveFunction);
console.log(memoizedFunction(5)); // Computing... 25
console.log(memoizedFunction(5)); // 25 (cached result)
Custom Cache Size
This feature allows you to specify a custom cache size for the memoized function. Once the cache size is exceeded, the least recently used cached result is discarded.
const memize = require('memize');
function expensiveFunction(x) {
  console.log('Computing...');
  return x * x;
}
const memoizedFunction = memize(expensiveFunction, { maxSize: 3 });
console.log(memoizedFunction(5)); // Computing... 25
console.log(memoizedFunction(6)); // Computing... 36
console.log(memoizedFunction(7)); // Computing... 49
console.log(memoizedFunction(5)); // 25 (cached result, entry moved to front)
console.log(memoizedFunction(8)); // Computing... 64 (cache full; least recently used entry, 6, is discarded)
console.log(memoizedFunction(6)); // Computing... 36 (recomputed after eviction; 7 is discarded)
console.log(memoizedFunction(7)); // Computing... 49 (recomputed; 5 is discarded)
console.log(memoizedFunction(5)); // Computing... 25 (recomputed)
Lodash's memoize function provides similar functionality to memize, allowing you to cache the results of function calls. It also allows for custom cache resolvers to determine the cache key. Compared to memize, lodash.memoize is part of the larger lodash utility library, which offers a wide range of other utility functions.
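As a short sketch of that resolver, using lodash's documented _.memoize(func, resolver) signature (the add function here is purely illustrative):
const _ = require('lodash');

function add(a, b) {
  console.log('Computing...');
  return a + b;
}

// By default, _.memoize keys the cache on the first argument only; the
// optional resolver maps all arguments to a single cache key.
const memoizedAdd = _.memoize(add, (a, b) => `${a}:${b}`);

console.log(memoizedAdd(1, 2)); // Computing... 3
console.log(memoizedAdd(1, 2)); // 3 (cached result)
console.log(memoizedAdd(1, 3)); // Computing... 4 (different key)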
fast-memoize is another library focused on memoization. It is designed to be very fast and efficient, with a focus on performance. It offers similar functionality to memize but is optimized for speed, making it a good choice for performance-critical applications.
memoizee is a highly configurable memoization library that offers a wide range of options, including support for multiple arguments, custom cache resolvers, and cache expiration. It provides more advanced features compared to memize, making it suitable for more complex use cases.
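As a hedged sketch of that configurability, memoizee's documented maxAge option expires cached results after a given number of milliseconds (the expensiveFunction here is purely illustrative):
const memoizee = require('memoizee');

function expensiveFunction(x) {
  console.log('Computing...');
  return x * x;
}

// maxAge (in milliseconds) expires cached results after the given time.
const memoizedFunction = memoizee(expensiveFunction, { maxAge: 1000 });

console.log(memoizedFunction(5)); // Computing... 25
console.log(memoizedFunction(5)); // 25 (cached result)
// A second later, the cached entry has expired and the next call recomputes.
setTimeout(() => console.log(memoizedFunction(5)), 1100); // Computing... 25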
Memize is an unabashedly barebones memoization library with an aim toward speed. By all accounts, Memize is the fastest memoization implementation in JavaScript (see benchmarks, how it works). It supports multiple arguments, including non-primitive arguments (by reference). All this, weighing in at less than 0.3kb minified and gzipped, with no dependencies.
Simply pass your original function as an argument to Memize. The return value is a new, memoized function.
function fibonacci( number ) {
	if ( number < 2 ) {
		return number;
	}
	return fibonacci( number - 1 ) + fibonacci( number - 2 );
}
var memoizedFibonacci = memize( fibonacci );
memoizedFibonacci( 8 ); // Invoked, cached, and returned
memoizedFibonacci( 8 ); // Returned from cache
memoizedFibonacci( 5 ); // Invoked, cached, and returned
memoizedFibonacci( 8 ); // Returned from cache
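Note that in the example above only top-level calls are cached; the recursive calls inside fibonacci still invoke the original, un-memoized function. One way to memoize the recursion itself is to have the function call the memoized reference, as in this sketch:
var memoizedFibonacci = memize( function( number ) {
	if ( number < 2 ) {
		return number;
	}
	// Recursive calls go through the memoized reference, so intermediate
	// results are cached as well.
	return memoizedFibonacci( number - 1 ) + memoizedFibonacci( number - 2 );
} );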
Using npm as a package manager:
npm install memize
Otherwise, download a pre-built copy from unpkg:
https://unpkg.com/memize/dist/memize.min.js
Memize accepts a function to be memoized, and returns a new memoized function.
memize( fn: Function, options: ?{
maxSize?: number
} ): Function
Optionally, pass an options object with maxSize defining the maximum size of the cache.
The memoized function exposes a clear function if you need to reset the cache:
memoizedFn.clear();
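A brief usage sketch, reusing the expensiveFunction from the earlier examples:
const memoizedFunction = memize( expensiveFunction, { maxSize: 100 } );

memoizedFunction( 5 ); // Invoked, cached, and returned
memoizedFunction( 5 ); // Returned from cache
memoizedFunction.clear(); // Cache is now empty
memoizedFunction( 5 ); // Invoked, cached, and returned again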
The following benchmarks are performed in Node 10.16.0 on a MacBook Pro (2019), 2.4 GHz 8-Core Intel Core i9, 32 GB 2400 MHz DDR4 RAM.
Single argument
Name | Ops / sec | Relative margin of error | Sample size |
---|---|---|---|
fast-memoize | 360,812,575 | ± 0.55% | 87 |
memize | 128,909,282 | ± 1.06% | 87 |
moize | 102,858,648 | ± 0.66% | 88 |
lru-memoize | 71,589,564 | ± 0.90% | 88 |
lodash | 49,575,743 | ± 1.00% | 88 |
underscore | 35,805,268 | ± 0.86% | 88 |
memoizee | 35,357,004 | ± 0.55% | 87 |
moize (serialized) | 27,246,184 | ± 0.88% | 87 |
memoizerific | 8,647,735 | ± 0.91% | 91 |
ramda | 8,011,334 | ± 0.74% | 90 |
memoizejs | 2,111,745 | ± 0.52% | 88 |
* Note: fast-memoize uses Function.length to optimize for singular-argument functions, which can yield unexpected behavior if not accounted for.
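To illustrate the mechanism (this is standard JavaScript behavior, not fast-memoize's internals): Function.length reports a function's declared arity, and rest or default parameters do not count toward it, which is why arity-based shortcuts can mishandle such functions.
function oneArg( x ) {
	return x;
}

function restArgs( ...args ) {
	return args;
}

function defaultArg( x = 1 ) {
	return x;
}

console.log( oneArg.length ); // 1
console.log( restArgs.length ); // 0 (rest parameters don't count)
console.log( defaultArg.length ); // 0 (parameters with defaults don't count)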
Multiple arguments (primitive)
Name | Ops / sec | Relative margin of error | Sample size |
---|---|---|---|
memize | 81,460,517 | ± 0.61% | 88 |
moize | 66,896,395 | ± 0.90% | 83 |
lru-memoize | 26,315,198 | ± 1.26% | 85 |
memoizee | 18,237,056 | ± 0.60% | 90 |
moize (serialized) | 15,207,105 | ± 0.78% | 84 |
memoizerific | 6,363,555 | ± 0.63% | 88 |
memoizejs | 1,764,673 | ± 0.57% | 90 |
fast-memoize | 1,560,421 | ± 0.72% | 87 |
Multiple arguments (non-primitive)
Name | Ops / sec | Relative margin of error | Sample size |
---|---|---|---|
memize | 79,105,918 | ± 0.81% | 86 |
moize | 62,374,610 | ± 0.55% | 87 |
lru-memoize | 24,814,747 | ± 0.54% | 89 |
memoizee | 12,119,005 | ± 0.47% | 89 |
memoizerific | 6,748,675 | ± 0.66% | 88 |
moize (serialized) | 2,027,250 | ± 1.07% | 87 |
fast-memoize | 1,263,457 | ± 1.00% | 89 |
memoizejs | 1,075,690 | ± 0.61% | 87 |
If you haven't already, feel free to glance over the source code. The code is heavily commented and should help provide substance to the implementation concepts.
Memize creates a last-in, first-out stack implemented as a doubly linked list. It biases recent access, favoring real-world scenarios where the function is subsequently invoked multiple times with the same arguments. The choice to implement as a linked list is due to dramatically better performance characteristics compared to Array#unshift for surfacing an entry to the head of the list (jsperf). A downside of linked lists is the inability to efficiently access arbitrary indices, but iterating from the beginning of the cache list is optimized by guaranteeing the list is sorted by recent access / insertion.
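A rough sketch of that "surface to head" operation (illustrative only; not memize's actual source):
// Move a cache node to the head of a doubly linked list after a cache hit.
function surfaceToHead( cache, node ) {
	if ( cache.head === node ) {
		return;
	}
	// Detach the node from its current position.
	if ( node.prev ) {
		node.prev.next = node.next;
	}
	if ( node.next ) {
		node.next.prev = node.prev;
	}
	if ( cache.tail === node ) {
		cache.tail = node.prev;
	}
	// Reattach the node at the head.
	node.prev = null;
	node.next = cache.head;
	if ( cache.head ) {
		cache.head.prev = node;
	}
	cache.head = node;
}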
Each node in the list tracks the original arguments as an array. This acts as a key of sorts, matching arguments of the current invocation by performing a shallow equality comparison on the two arrays. Other memoization implementations often use JSON.stringify to generate a string key for lookup in an object cache, but this benchmarks much slower than a shallow comparison (jsperf).
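A minimal sketch of such a shallow argument comparison (illustrative; not memize's exact source):
// Returns true if both argument lists have the same length and every
// position is strictly equal (objects match by reference, not by value).
function isShallowEqualArgs( a, b ) {
	if ( a.length !== b.length ) {
		return false;
	}
	for ( let i = 0; i < a.length; i++ ) {
		if ( a[ i ] !== b[ i ] ) {
			return false;
		}
	}
	return true;
}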
Finally, special care is taken toward the treatment of arguments due to engine-specific deoptimizations which can occur in V8 via arguments leaking. Order is important here; we only create a shallow clone when necessary, after the cache has been checked, to avoid creating a clone unnecessarily if a cache entry exists. Looking at the code, you'd not be blamed for thinking that dropping the shallow clone would improve performance, but in fact it would slow execution by approximately 60%. This is due to how the lingering arguments reference would carry over by reference ("leak") in the node's args property. Update: As of November 2019, engine improvements are such that arguments leaking does not have as dramatic an effect. However, my testing shows that the shallow clone still performs equally well or better than referencing arguments directly, and as such the implementation has not been revised, in order to achieve optimal performance across the most common versions of V8.
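A sketch of that ordering, reusing the isShallowEqualArgs sketch from above (simplified and hypothetical; the real implementation stores nodes in the linked list rather than an array):
// Simplified wrapper showing the order of operations described above:
// check the cache using `arguments` directly, and clone only on a miss.
function createMemoized( fn ) {
	const cacheNodes = []; // stand-in for memize's doubly linked list

	return function() {
		// 1. Look up the cache using `arguments` directly; no clone yet.
		for ( const node of cacheNodes ) {
			if ( isShallowEqualArgs( node.args, arguments ) ) {
				return node.value;
			}
		}
		// 2. Only on a miss, shallow-clone `arguments` into a real array so
		//    the node's `args` property doesn't retain ("leak") the
		//    arguments object itself.
		const args = new Array( arguments.length );
		for ( let i = 0; i < arguments.length; i++ ) {
			args[ i ] = arguments[ i ];
		}
		// 3. Compute and store the result at the front of the cache.
		const value = fn.apply( null, args );
		cacheNodes.unshift( { args, value } );
		return value;
	};
}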
Copyright 2018-2020 Andrew Duthie
Released under the MIT License.
v1.1.0 (2020-03-07)
FAQs
Unabashedly-barebones memoization library with an aim toward speed
The npm package memize receives a total of 127,162 weekly downloads. As such, memize's popularity was classified as popular.
We found that memize demonstrated an unhealthy version release cadence and project activity because the last version was released a year ago. It has one open source maintainer collaborating on the project.